
    Effect of coitus at term on length of gestation, induction of labor, and mode of delivery.

    OBJECTIVE: To determine coital incidence at term and to estimate its effect on labor onset and mode of delivery. METHODS: Healthy women with uncomplicated pregnancies and established gestational age were recruited to keep a diary of coital activity from 36 weeks of gestation until birth and to answer a short questionnaire. Two hundred women with complete coital diaries were available for analysis. Outcome measures included coitus, postdate pregnancy (defined as pregnancy beyond the estimated date of confinement), gestational length of at least 41 weeks, labor induction at 41 weeks of gestation, and mode of delivery. RESULTS: Reported sexual intercourse at term was influenced by a woman's perception of coital safety, her ethnicity, and her partner's age. After multivariable logistic regression analysis controlling for the women's ethnicity, education, occupation, perception of coital safety, and partner's age, coitus at term remained independently associated with reductions in postdate pregnancy (adjusted odds ratio [AOR] 0.28, 95% confidence interval [CI] 0.13-0.58, P = .001), gestational length of at least 41 weeks (AOR 0.10, 95% CI 0.04-0.28, P < .001), and requirement for labor induction at 41 weeks of gestation (AOR 0.08, 95% CI 0.03-0.26, P < .001). At 39 weeks of gestation, 5 (95% CI 3.3-10.3) couples needed to have intercourse to avoid one woman having to undergo labor induction at 41 weeks of gestation. Coitus at term had no significant effect on operative delivery (adjusted P = .15). CONCLUSION: Reported sexual intercourse at term was associated with earlier onset of labor and a reduced requirement for labor induction at 41 weeks of gestation.
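
    The "5 couples needed to have intercourse" figure above is a number needed to treat (NNT). As a brief, generic reminder of how such a figure is derived (the event rates below are placeholder symbols, not the study's data):

```latex
% NNT from an absolute risk reduction (ARR).
% p_{control}, p_{exposed}: induction rates in the non-exposed and exposed groups
% (placeholder symbols; the study's actual rates are not reproduced here).
\[
\mathrm{ARR} = p_{\text{control}} - p_{\text{exposed}}, \qquad
\mathrm{NNT} = \frac{1}{\mathrm{ARR}}
\]
% For example, an ARR of 0.20 would give NNT = 1/0.20 = 5.
```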

    What You See Is What You Get? The Impact of Representation Criteria on Human Bias in Hiring

    Although systematic biases in decision-making are widely documented, the ways in which they emerge from different sources are less understood. We present a controlled experimental platform to study gender bias in hiring by decoupling the effect of world distribution (the gender breakdown of candidates in a specific profession) from bias in human decision-making. We explore the effectiveness of representation criteria, i.e., fixed proportional display of candidates, as an intervention strategy for mitigating gender bias by conducting experiments that measure human decision-makers' rankings of whom they would recommend as potential hires. Experiments across professions with varying gender proportions show that balancing gender representation in candidate slates can correct biases for some professions where the world distribution is skewed, although doing so has no impact on other professions where persistent human preferences are at play. We show that the gender of the decision-maker, the complexity of the decision-making task, and over- and under-representation of genders in the candidate slate can all impact the final decision. By decoupling sources of bias, we can better isolate strategies for bias mitigation in human-in-the-loop systems. Comment: This paper has been accepted for publication at HCOMP 201
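
    As a rough illustration of the representation-criteria intervention described above (a hypothetical sketch, not the authors' experimental code; the function and field names are invented for illustration), a candidate slate can be drawn either from the profession's world distribution or under a fixed proportional constraint:

```python
import random

def sample_slate(candidates, slate_size, female_share=None, seed=0):
    """Draw a slate of candidates to show a decision-maker.

    candidates   : list of dicts, each with a "gender" field ("F" or "M")
    female_share : None  -> sample from the pool as-is (the world distribution)
                   float -> enforce a fixed proportion of female candidates
                            (the representation criterion, e.g. 0.5 for a balanced slate)
    """
    rng = random.Random(seed)
    if female_share is None:
        return rng.sample(candidates, slate_size)

    women = [c for c in candidates if c["gender"] == "F"]
    men = [c for c in candidates if c["gender"] == "M"]
    n_women = round(slate_size * female_share)
    slate = rng.sample(women, n_women) + rng.sample(men, slate_size - n_women)
    rng.shuffle(slate)
    return slate
```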

    Human-Guided Complexity-Controlled Abstractions

    Neural networks often learn task-specific latent representations that fail to generalize to novel settings or tasks. Conversely, humans learn discrete representations (i.e., concepts or words) at a variety of abstraction levels (e.g., "bird" vs. "sparrow") and deploy the appropriate abstraction based on task. Inspired by this, we train neural models to generate a spectrum of discrete representations, and control the complexity of the representations (roughly, how many bits are allocated for encoding inputs) by tuning the entropy of the distribution over representations. In finetuning experiments, using only a small number of labeled examples for a new task, we show that (1) tuning the representation to a task-appropriate complexity level supports the highest finetuning performance, and (2) in a human-participant study, users were able to identify the appropriate complexity level for a downstream task using visualizations of discrete representations. Our results indicate a promising direction for rapid model finetuning by leveraging human insight. Comment: NeurIPS 202
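
    One way to picture the entropy-based complexity control described above (a loose sketch assuming a categorical bottleneck over discrete codes, not the paper's exact architecture) is as an entropy term added to the encoder's task loss:

```python
import torch.nn.functional as F

def complexity_controlled_loss(code_logits, task_loss, entropy_weight):
    """Task loss plus an entropy term on the distribution over discrete codes.

    code_logits    : (batch, num_codes) unnormalized scores over discrete representations
    task_loss      : scalar loss of the downstream task
    entropy_weight : larger positive values push toward lower-entropy distributions,
                     i.e. fewer bits allocated for encoding inputs (coarser abstractions)
    """
    probs = F.softmax(code_logits, dim=-1)
    log_probs = F.log_softmax(code_logits, dim=-1)
    entropy = -(probs * log_probs).sum(dim=-1).mean()
    return task_loss + entropy_weight * entropy
```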

    Diagnosis, Feedback, Adaptation: A Human-in-the-Loop Framework for Test-Time Policy Adaptation

    Policies often fail due to distribution shift -- changes in the state and reward that occur when a policy is deployed in new environments. Data augmentation can increase robustness by making the model invariant to task-irrelevant changes in the agent's observation. However, designers don't know which concepts are irrelevant a priori, especially when different end users have different preferences about how the task is performed. We propose an interactive framework to leverage feedback directly from the user to identify personalized task-irrelevant concepts. Our key idea is to generate counterfactual demonstrations that allow users to quickly identify possible task-relevant and irrelevant concepts. The knowledge of task-irrelevant concepts is then used to perform data augmentation and thus obtain a policy adapted to personalized user objectives. We present experiments validating our framework on discrete and continuous control tasks with real human users. Our method (1) enables users to better understand agent failure, (2) reduces the number of demonstrations required for fine-tuning, and (3) aligns the agent to individual user task preferences. Comment: International Conference on Machine Learning (ICML) 202
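
    As a hedged sketch of how user-identified task-irrelevant concepts could drive data augmentation (the observation format and concept names are illustrative, not taken from the paper):

```python
import random

def augment_observation(obs, irrelevant_concepts, value_ranges, rng=random):
    """Randomize the features a user marked as task-irrelevant.

    obs                 : dict mapping concept name -> current value
    irrelevant_concepts : concepts the user said do not matter for the task
    value_ranges        : dict mapping concept name -> list of allowed values
    """
    augmented = dict(obs)
    for concept in irrelevant_concepts:
        augmented[concept] = rng.choice(value_ranges[concept])
    return augmented

# Example: the user indicated that background color is irrelevant, so the
# policy is fine-tuned on observations with the background randomized.
obs = {"background": "green", "goal_position": (3, 4)}
aug = augment_observation(obs, ["background"], {"background": ["green", "red", "blue"]})
```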

    Human-Machine Collaboration for Fast Land Cover Mapping

    We propose incorporating human labelers in a model fine-tuning system that provides immediate user feedback. In our framework, human labelers can interactively query model predictions on unlabeled data, choose which data to label, and see the resulting effect on the model's predictions. This bi-directional feedback loop allows humans to learn how the model responds to new data. Our hypothesis is that this rich feedback allows human labelers to create mental models that enable them to better choose which biases to introduce to the model. We compare human-selected points to points selected using standard active learning methods. We further investigate how the fine-tuning methodology impacts the human labelers' performance. We implement this framework for fine-tuning high-resolution land cover segmentation models. Specifically, we fine-tune a deep neural network -- trained to segment high-resolution aerial imagery into different land cover classes in Maryland, USA -- to a new spatial area in New York, USA. The tight loop turns the algorithm and the human operator into a hybrid system that can produce land cover maps of a large area much more efficiently than the traditional workflows. Our framework has applications in geospatial machine learning settings where there is a practically limitless supply of unlabeled data, of which only a small fraction can feasibly be labeled through human efforts. Comment: To appear in AAAI 202
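
    A minimal sketch of the tight label-retrain-inspect loop described above (the callback names are placeholders, not the system's API; in the actual framework the model is a deep segmentation network and the update typically touches only its final layers):

```python
def interactive_finetune(model, tiles, predict, get_user_label, finetune_step, n_rounds=50):
    """Schematic human-in-the-loop fine-tuning loop.

    tiles          : unlabeled image tiles the labeler can browse
    predict        : fn(model, tile) -> prediction shown to the labeler as feedback
    get_user_label : fn(tile, prediction) -> (pixel, land_cover_class), or None to skip
    finetune_step  : fn(model, pixel, land_cover_class) -> updated model
    """
    for _ in range(n_rounds):
        for tile in tiles:                      # labeler browses candidate tiles
            prediction = predict(model, tile)   # immediate feedback from the current model
            label = get_user_label(tile, prediction)
            if label is not None:               # labeler chose a point on this tile
                pixel, cls = label
                model = finetune_step(model, pixel, cls)
                break                           # re-run predictions with the updated model
    return model
```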

    Causal relationship between Butyricimonas and allergic asthma: a two-sample Mendelian randomization study

    Background: Growing evidence has well documented the close association between the gut microbiome and allergic respiratory disease, most notably allergic asthma. However, it is unclear whether this association is a causal link. Therefore, we investigated the potential causal associations between the gut microbiome and allergic asthma or other allergic diseases. Methods: In this study, we performed two-sample Mendelian randomization (MR) analyses using publicly available genome-wide association study (GWAS) summary data. Single-nucleotide polymorphisms (SNPs) that were significantly associated with the exposure were selected as instrumental variables. The inverse variance weighted (IVW) method was used to examine the potential causal effects of gut microbial genera on allergic asthma and other allergic diseases. The robustness of the primary findings of the MR analyses was ensured by using different sensitivity analyses. Results: Combining the findings from multiple analyses, host genetic-driven increases in the genus Butyricimonas were positively correlated with the risk of allergic asthma. In addition, the phylum Bacteroidetes and the class Bacteroidia were found to have negative associations with the risk of allergic asthma, and the genus Slackia was identified as having a potential causal effect on allergic asthma. No clear evidence of pleiotropy or heterogeneity was observed for the genus Butyricimonas. Butyricimonas was also found to have an association with allergic rhinitis, but not with other allergic diseases. Conclusion: Our findings identify new gut microbial genera causally associated with the risk of allergic asthma and other allergic diseases, and offer novel insights into the pathogenesis of allergic respiratory diseases.
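
    For context, the inverse variance weighted (IVW) estimate used in the methods combines per-SNP Wald ratios, weighting each by the precision of its outcome estimate; a standard formulation (generic notation, not copied from the paper) is:

```latex
% Per-SNP Wald ratio and its inverse variance weighted (IVW) combination.
% \hat\beta_{Xj}, \hat\beta_{Yj}: SNP j's estimated effects on the exposure and the outcome
% \sigma_{Yj}: standard error of \hat\beta_{Yj}
\[
\hat\theta_j = \frac{\hat\beta_{Yj}}{\hat\beta_{Xj}}, \qquad
\hat\theta_{\mathrm{IVW}}
  = \frac{\sum_j \hat\beta_{Xj}^{2}\,\sigma_{Yj}^{-2}\,\hat\theta_j}
         {\sum_j \hat\beta_{Xj}^{2}\,\sigma_{Yj}^{-2}}
  = \frac{\sum_j \hat\beta_{Xj}\,\hat\beta_{Yj}\,\sigma_{Yj}^{-2}}
         {\sum_j \hat\beta_{Xj}^{2}\,\sigma_{Yj}^{-2}}
\]
```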

    Getting aligned on representational alignment

    Biological and artificial information processing systems form representations that they can use to categorize, reason, plan, navigate, and make decisions. How can we measure the extent to which the representations formed by these diverse systems agree? Do similarities in representations then translate into similar behavior? How can a system's representations be modified to better match those of another system? These questions pertaining to the study of representational alignment are at the heart of some of the most active research areas in cognitive science, neuroscience, and machine learning. For example, cognitive scientists measure the representational alignment of multiple individuals to identify shared cognitive priors, neuroscientists align fMRI responses from multiple individuals into a shared representational space for group-level analyses, and ML researchers distill knowledge from teacher models into student models by increasing their alignment. Unfortunately, there is limited knowledge transfer between research communities interested in representational alignment, so progress in one field often ends up being rediscovered independently in another. Thus, greater cross-field communication would be advantageous. To improve communication between these fields, we propose a unifying framework that can serve as a common language between researchers studying representational alignment. We survey the literature from all three fields and demonstrate how prior work fits into this framework. Finally, we lay out open problems in representational alignment where progress can benefit all three of these fields. We hope that our work can catalyze cross-disciplinary collaboration and accelerate progress for all communities studying and developing information processing systems. We note that this is a working paper and encourage readers to reach out with their suggestions for future revisions. Comment: Working paper; changes to be made in an upcoming revision.
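
    One widely used way to quantify how much two systems' representations agree (a generic example of an alignment metric, not one proposed by this paper) is linear centered kernel alignment (CKA) between activation matrices recorded on the same stimuli:

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two representation matrices.

    X : (n_stimuli, d1) activations from system 1
    Y : (n_stimuli, d2) activations from system 2 on the same stimuli
    Returns a value in [0, 1]; 1 indicates representations that match up to
    an orthogonal transformation and isotropic scaling.
    """
    X = X - X.mean(axis=0, keepdims=True)   # center each feature dimension
    Y = Y - Y.mean(axis=0, keepdims=True)
    numerator = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    denominator = (np.linalg.norm(X.T @ X, ord="fro")
                   * np.linalg.norm(Y.T @ Y, ord="fro"))
    return numerator / denominator
```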

    Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback

    Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-faceted approach to the development of safer AI systems.
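
    For readers new to RLHF, the reward-modeling stage it rests on is typically trained with a pairwise preference loss of the following form (a generic sketch of the standard Bradley-Terry objective, not code from the paper):

```python
import torch.nn.functional as F

def preference_loss(reward_chosen, reward_rejected):
    """Pairwise loss for learning a reward model from human comparisons.

    reward_chosen   : (batch,) scalar rewards for the human-preferred responses
    reward_rejected : (batch,) scalar rewards for the dispreferred responses
    The fitted reward model is then used as the objective when fine-tuning the
    LLM with reinforcement learning (e.g. PPO).
    """
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()
```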